
    Implementing measures of cost control (food & labour) for small restaurant businesses

    In 2012, the Auckland Region Restaurant Record estimated that there were 2,000 restaurants in the Auckland area. With such fierce market competition, it is important for small restaurants to consider how they can save costs and become more competitive. This research aims to identify feasible measures that small restaurant businesses can take to control food and labour costs in order to gain an advantage in a competitive environment. The research combines quantitative and qualitative methods using convenience-based sampling; the restaurants studied are located in Hamilton. To analyse the participants' answers, the responses are compared with results from the literature review and displayed as graphs. Owing to limited time and resources, the sample selection, size, and location are limited. Tentative results illustrate that few of the restaurants studied take measures to deal with food waste and leftovers; they usually dispose of waste into the rubbish bin. Apart from participants who are unaware of inventory storage systems, the restaurants adopt FIFO (first in, first out) as their inventory storage method. Moreover, employees are unhappy if their wages or working hours are cut, and some will reduce their work quality or efficiency as a result. Consequently, restaurants need to strike a balance between employee wages and work efficiency.
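    The FIFO inventory method mentioned in the abstract can be sketched as a simple queue: the oldest delivery leaves storage first, so stock nearest its use-by date is consumed before newer stock. The item names and dates below are illustrative, not data from the study.

    ```python
    # FIFO ("first in, first out") stock rotation: new deliveries go
    # to the back of the queue, and the oldest batch is used first.
    from collections import deque

    shelf = deque()  # oldest batch sits at the left end

    def receive(batch):
        """Record a new delivery; it goes behind existing stock."""
        shelf.append(batch)

    def use_next():
        """Take the oldest batch off the shelf first."""
        return shelf.popleft()

    receive(("milk", "2012-05-01"))  # hypothetical older delivery
    receive(("milk", "2012-05-08"))  # hypothetical newer delivery
    print(use_next())  # the 2012-05-01 batch (oldest) is used first
    ```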

    Performance optimization and energy efficiency of big-data computing workflows

    Next-generation e-science is producing colossal amounts of data, now frequently termed Big Data, on the order of terabytes at present and petabytes or even exabytes in the foreseeable future. These scientific applications typically feature data-intensive workflows composed of moldable parallel computing jobs, such as MapReduce, with intricate inter-job dependencies. The granularity of task partitioning in each moldable job of such big data workflows has a significant impact on workflow completion time, energy consumption, and financial cost if executed in clouds, which remains largely unexplored. This dissertation conducts an in-depth investigation into the properties of moldable jobs and provides an experiment-based validation of the performance model where the total workload of a moldable job increases along with the degree of parallelism. Furthermore, this dissertation conducts rigorous research on workflow execution dynamics in resource sharing environments and explores the interactions between workflow mapping and task scheduling on various computing platforms. A workflow optimization architecture is developed to seamlessly integrate three interrelated technical components, i.e., resource allocation, job mapping, and task scheduling. Cloud computing provides a cost-effective computing platform for big data workflows where moldable parallel computing models are widely applied to meet stringent performance requirements. Based on the moldable parallel computing performance model, a big-data workflow mapping model is constructed and a workflow mapping problem is formulated to minimize workflow makespan under a budget constraint in public clouds.
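    The moldable-job performance model described here — total workload increasing with the degree of parallelism — can be sketched as follows. The linear overhead term and the constants `W0` and `ALPHA` are assumptions for illustration, not the dissertation's fitted model:

    ```python
    # Sketch: as a moldable job is partitioned into p parallel tasks,
    # per-task overhead inflates the total workload, so speedup
    # saturates rather than scaling linearly with p.

    W0 = 100.0    # base (serial) workload, arbitrary units (assumed)
    ALPHA = 0.05  # workload inflation per additional task (assumed)

    def total_workload(p: int) -> float:
        """Total workload after partitioning into p parallel tasks."""
        return W0 * (1 + ALPHA * (p - 1))

    def completion_time(p: int) -> float:
        """Job completion time with p tasks running in parallel."""
        return total_workload(p) / p

    if __name__ == "__main__":
        for p in (1, 2, 4, 8, 16):
            print(f"p={p:2d}  time={completion_time(p):7.2f}")
    ```

    Under this model, completion time approaches the asymptote `W0 * ALPHA` as `p` grows, which is why the choice of partitioning granularity matters for both makespan and cost.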
This dissertation shows this problem to be strongly NP-complete and designs i) a fully polynomial-time approximation scheme for a special case with a pipeline-structured workflow executed on virtual machines of a single class, and ii) a heuristic for a generalized problem with an arbitrary directed acyclic graph-structured workflow executed on virtual machines of multiple classes. The performance superiority of the proposed solution is illustrated by extensive simulation-based results in Hadoop/YARN in comparison with existing workflow mapping models and algorithms. Considering that large-scale workflows for big data analytics have become a main consumer of energy in data centers, this dissertation also delves into the problem of static workflow mapping to minimize the dynamic energy consumption of a workflow request under a deadline constraint in Hadoop clusters, which is shown to be strongly NP-hard. A fully polynomial-time approximation scheme is designed for a special case with a pipeline-structured workflow on a homogeneous cluster and a heuristic is designed for the generalized problem with an arbitrary directed acyclic graph-structured workflow on a heterogeneous cluster. This problem is further extended to a dynamic version with deadline-constrained MapReduce workflows to minimize dynamic energy consumption in Hadoop clusters. This dissertation proposes a semi-dynamic online scheduling algorithm based on adaptive task partitioning to reduce dynamic energy consumption while meeting performance requirements from a global perspective, and also develops corresponding system modules for algorithm implementation in the Hadoop ecosystem. 
The performance superiority of the proposed solutions in terms of dynamic energy saving and deadline miss rate is illustrated by extensive simulation results in comparison with existing algorithms, and further validated through real-life workflow implementation and experiments using the Oozie workflow engine in Hadoop/YARN systems.
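    The pipeline special case above admits a simple pseudo-polynomial dynamic program over the budget, which is the core that an FPTAS then obtains by rounding costs. The sketch below illustrates that idea, not the dissertation's algorithm: each pipeline stage selects one VM class with an assumed (time, cost) pair, and the total pipeline time is minimized under an integer budget.

    ```python
    # DP over (stage, spent budget): best[c] holds the minimal total
    # pipeline time achievable at cost exactly c after the stages
    # processed so far. Runs in O(stages * budget * options) time.

    def map_pipeline(stages, budget):
        """stages: one list of (time, cost) VM-class options per stage.
        Returns the minimal makespan within budget, or None if infeasible."""
        INF = float("inf")
        best = [INF] * (budget + 1)
        best[0] = 0.0
        for options in stages:
            nxt = [INF] * (budget + 1)
            for c in range(budget + 1):
                if best[c] == INF:
                    continue
                for t, cost in options:
                    if c + cost <= budget:
                        nxt[c + cost] = min(nxt[c + cost], best[c] + t)
            best = nxt
        ans = min(best)
        return None if ans == INF else ans

    # Hypothetical two-stage pipeline: cheaper VM classes are slower.
    stages = [
        [(10, 1), (6, 2), (3, 4)],  # stage 1 options: (time, cost)
        [(8, 1), (5, 3)],           # stage 2 options
    ]
    print(map_pipeline(stages, 5))  # best trade-off within budget 5
    ```

    Because the table is indexed by cost, the running time depends on the budget's magnitude; rounding and scaling costs to a coarser grid is what turns this into a fully polynomial-time approximation scheme.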

    A Study of Corporate Managerial Module and Information System Orienting Agile Virtual Enterprises

    Operational modules and information system construction are essential components of agile virtual enterprises. Based on analyses of the characteristics, organizational forms, and operational processes of agile virtual enterprises, this study extends to the corresponding information systems, and the development of systematic logic modules and a kernel methodology is also proposed in the paper.